Split the detection of tablet devices, touch-enabled devices and keyboard-less devices
Categories
(Core :: Widget: Win32, task, P2)
People
(Reporter: gsvelto, Unassigned)
References
(Blocks 2 open bugs)
Details
(Whiteboard: [win:touch])
Follow-up from bug 1722208. In that bug I discovered that we conflate the concept of having a touch screen as the input device with having a tablet form factor, i.e. not having a keyboard. As I discovered in bug 1722208 comment 14 this is a problem. We want to display the on-screen keyboard if we don't have a physical keyboard, and we want to change certain aspects of the layout if the primary input is a touch screen. Unfortunately the two concepts don't necessarily match; here are a few examples:
- A tablet with a keyboard attached, we want to show a tablet UI (because touch input) but no on-screen keyboard
- A convertible slate in tablet mode: we want to show a tablet UI and an on-screen keyboard (but this doesn't work before bug 1722208 on Windows 11, because Microsoft changed the semantics of the UIViewSettings.UserInteractionMode property, or maybe it's just buggy)
- A desktop with a touch screen but no keyboard (think of an ATM or a smart display): it's not a tablet, but we most certainly want the tablet UI with larger buttons (because touch) and the on-screen keyboard (because no keyboard)
- A desktop with no touch screen and no keyboard, but with a mouse attached: the layout would be the regular desktop one (because mouse), but we'd need to pop up an on-screen keyboard (because no keyboard). While this might seem like a contrived use case, I happen to know at least one user who can use a trackball or a touchpad but has significant problems using a keyboard.
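The four scenarios above reduce to two independent decisions, which is the whole point of the split. A minimal sketch of that separation (hypothetical names, not actual Firefox code):

```cpp
#include <cassert>

// Hypothetical device state. In Firefox these booleans would come from
// separate widget-level probes, not from a single "is this a tablet?" flag.
struct InputDevices {
  bool hasTouchScreen;
  bool hasPhysicalKeyboard;
  bool hasMouse;
};

// The touch-friendly layout depends only on touch input being available.
bool WantsTabletUI(const InputDevices& d) { return d.hasTouchScreen; }

// The on-screen keyboard depends only on the physical keyboard being absent.
bool WantsOnScreenKeyboard(const InputDevices& d) {
  return !d.hasPhysicalKeyboard;
}
```

Each scenario then falls out naturally: the tablet with a keyboard attached gets a tablet UI without the on-screen keyboard, while the mouse-only desktop with no keyboard gets a desktop UI with one.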
How do we do this? Here's the laundry list:
- We implement a function to detect if a keyboard is present or not. Chrome has some code that enumerates all keyboards, we might use something like that. We don't want to copy everything they're doing because they're also checking if a device is a tablet and assuming it has no keyboard which would yield the wrong result as in bug 1722208.
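On Windows that kind of enumeration is typically built on GetRawInputDeviceList, keeping only RIM_TYPEKEYBOARD entries. A hedged sketch of just the counting logic, with the Win32 call left as a comment so the decision stays testable off-Windows (the enum values mirror winuser.h's RIM_TYPE* constants):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Mirrors the RIM_TYPE* values from winuser.h.
enum class RawDeviceType : uint32_t { Mouse = 0, Keyboard = 1, Hid = 2 };

// On Windows the vector would be filled via GetRawInputDeviceList():
//   UINT count = 0;
//   GetRawInputDeviceList(nullptr, &count, sizeof(RAWINPUTDEVICELIST));
//   std::vector<RAWINPUTDEVICELIST> list(count);
//   GetRawInputDeviceList(list.data(), &count, sizeof(RAWINPUTDEVICELIST));
// Note: unlike Chrome's version, this does NOT short-circuit to "no keyboard"
// just because the machine looks like a tablet, which is exactly the wrong
// result described in bug 1722208.
bool HasPhysicalKeyboard(const std::vector<RawDeviceType>& devices) {
  for (RawDeviceType type : devices) {
    if (type == RawDeviceType::Keyboard) return true;
  }
  return false;
}
```

A real implementation would also want to filter out phantom HID keyboards that some devices expose even with no physical keys attached; this sketch deliberately skips that.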
- We modify the code that decides whether to show the on-screen keyboard so that it no longer uses inTabletMode, but the function defined in the previous step instead.
- We audit the code that uses inTabletMode to be sure it really cares about the layout and not the touch input (this and this). We might even want to rename that property to something else such as touchDisplayDevice.
- We adjust telemetry to split the two concepts.
- We implement mouse detection without using IsTabletDevice(), because that's also subject to the cause of bug 1722208. Or we fix IsTabletDevice(), more on this later.
- We also adjust WinUtils::GetPrimaryPointerCapabilities() so it doesn't assume that a tablet has touch input as its primary input. It might have a mouse, and if the user plugged one in they're likely using it.
As for IsTabletDevice(), it's doing a lot of things which are similar to what Chrome does (see here and here). Chrome's version also uses Windows 10 tablet mode to override every other check, so it has the same issue as ours. What we need to do is get hold of some devices and actually test what all those devices return for the various calls, to get an idea of what's (apparently) reliable and what's not for detecting a tablet (or something that behaves tablet-like).
The results can sometimes be counterintuitive. For example, this returns false on my device when it's in tablet mode on Windows 11. That's because the AR_LAPTOP bit appears to be set when it shouldn't be (so maybe it's the same underlying issue that causes UIViewSettings.UserInteractionMode to be wrong). Additionally, there are desktop screens that can be rotated into portrait mode and probably support auto-rotation... but they are most definitely not tablets.
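GetAutoRotationState() returns a bitmask of AR_* flags, and the Chromium-style check is roughly "rotation works and the device isn't flagged as a laptop". A sketch showing why a spurious AR_LAPTOP bit flips the answer (flag values mirror winuser.h; the actual Win32 call is omitted so the logic is testable anywhere):

```cpp
#include <cassert>
#include <cstdint>

// Bits returned by GetAutoRotationState(); values mirror winuser.h.
enum ArState : uint32_t {
  AR_ENABLED = 0x0,
  AR_DISABLED = 0x1,
  AR_SUPPRESSED = 0x2,
  AR_REMOTESESSION = 0x4,
  AR_MULTIMON = 0x8,
  AR_NOSENSOR = 0x10,
  AR_NOT_SUPPORTED = 0x20,
  AR_LAPTOP = 0x40,
  AR_HEADLESS = 0x80,
};

// Chromium-style heuristic: a rotation-capable device not flagged as a laptop.
// If firmware wrongly sets AR_LAPTOP (as observed above on Windows 11), this
// returns false even in tablet mode. Conversely, a rotatable desktop monitor
// could pass it despite not being a tablet, so it's a weak signal either way.
bool AutoRotationSuggestsTablet(uint32_t state) {
  return (state & (AR_NOT_SUPPORTED | AR_LAPTOP | AR_NOSENSOR)) == 0;
}
```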
Comment 1•3 years ago
Probably not an S2.
Reminder, S2 means: (Serious) Major functionality/product severely impaired or a high impact issue and a satisfactory workaround does not exist
One symptom of the "tablet conflation" issue is that Firefox on Windows treats touch devices as "not a tablet" if they lack an auto-rotation sensor (e.g. a desktop PC with an HDMI monitor that has a built-in USB HID touchscreen), which causes it to respond to media queries with pointer: fine and hover: hover, even when the touchscreen is the primary (or only!) pointing device, causing websites to present a non-touch-friendly UI.
Using a non-touch-friendly UI in a touch-only environment can severely impair use of the software, depending on the size of the display, quality and type of touch sensor, and how much fine motor control the user has.
A website could work around the incorrect browser behaviour by always using touch-friendly UI elements. While that should still be usable, it'd look and feel strange.
Comment 3•9 months ago
I agree with what Michael just said, but I'm going to take it a step further: The "tablet conflation" issue is an unforced error that we have created for ourselves.
It's worth remembering that there is no CSS Media Query that asks the question "Are you on a tablet?". Instead, they ask the questions "Is there a pointing device capable of pixel precision?", "Is there a pointing device capable of approximate pointing?", "Is there a pointing device that supports hovering?", and "Which of those is the primary pointing device capable of?".
But instead of answering those questions directly, we invent this other question for ourselves ("Is this a tablet?"), answer it wrong, and then base our answers to the actual questions we were asked on that wrong answer.
The correct answer to the question "Does your device support auto-rotation?" is "Who cares!? Nobody is asking us that question! Does your device have a touchscreen or not?!??!"
I agree with your sentiments here. It seems like "what is the primary device" is broken as a concept for desktop browsers and also some Android devices because of OEM misconfiguration (same issue impacts Firefox on Android).
I think it's dishonest for any desktop browser to claim it supports the Media Queries Level 4 specification... nobody seems to implement "primary" devices properly at all.
Digging into Windows issues further: the MSDN docs for GetSystemMetrics(SM_CONVERTIBLESLATEMODE) have an interesting note (emphasis added):
Reflects the state of the laptop or slate mode, 0 for Slate Mode and non-zero otherwise. When this system metric changes, the system sends a broadcast message via WM_SETTINGCHANGE with "ConvertibleSlateMode" in the LPARAM. Note that this system metric doesn't apply to desktop PCs. In that case, use GetAutoRotationState.
Convertible Slate Mode seems to be designed for Microsoft Surface Pro-style devices where they're in tablet mode unless a keyboard/trackpad cover is attached to the device.
Windows 11 removed the explicit "tablet mode" switch, and replaced it with auto-detection. That auto-detection doesn't seem to trigger on a non-tablet PC with a touchscreen that doesn't support auto-rotation.
That wording about falling back to auto-rotation status makes me think Windows 11's auto-detection uses similar (faulty) logic.
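The split the MSDN note describes can be sketched as: consult the slate-mode metric on convertibles, and fall back to auto-rotation on desktop PCs, even though, as discussed above, both signals can be wrong. The signals are passed in as parameters so the decision itself is testable off-Windows; on Windows they'd come from GetSystemMetrics(SM_CONVERTIBLESLATEMODE) and GetAutoRotationState():

```cpp
#include <cassert>

// Per the MSDN note quoted above: GetSystemMetrics(SM_CONVERTIBLESLATEMODE)
// returns 0 in Slate Mode and non-zero otherwise, but "doesn't apply to
// desktop PCs", where GetAutoRotationState() is suggested instead.
bool LooksLikeSlate(bool isDesktopPC, int convertibleSlateMetric,
                    bool autoRotationAvailable) {
  if (isDesktopPC) {
    // Weak signal: rotatable desktop monitors may also report auto-rotation,
    // and they are most definitely not tablets.
    return autoRotationAvailable;
  }
  return convertibleSlateMetric == 0;  // 0 == Slate Mode
}
```

Note this still misclassifies the touch-screen desktop without an auto-rotation sensor, which is exactly the scenario described in comment 1.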
The original Firefox patch for this feature on Windows notes:
Note that the logic of IsTabletDevice() and relevant stuff are mostly a mimic of IsTabletDevice [1] and IsDeviceUsedAsATablet [2] in chromium.
As for the primary pointer's capabilities, there seems no API to tell which device is the primary ...
While there may be no API for getting the primary pointing device on Windows, the browser could try to infer the primary device from the user's activities. A web page could attempt this as well... but the whole purpose of this media query is so that a web page doesn't need to do that! :)
As for a work-around, a web page could instead use any-pointer: coarse as a signal that it should present a touch-friendly UI and alternatives to hovering. However, a Wiimote-type pointing device (using gyro and/or optical sensors to point at the screen from > 1.5 metres away) could be described as pointer: coarse, hover: hover.
I have flagged this issue for MDN's browser compatibility data, and also added commentary to a related issue with the CSSWG about any-hover being a useless signal.
Comment 6•9 months ago
I have flagged this issue for MDN's browser compatibility data, and also added commentary to a related issue with the CSSWG about any-hover being a useless signal.
Thanks for doing that!
It seems like "what is the primary device" is broken as a concept for desktop browsers and also some Android devices because of OEM misconfiguration (same issue impacts Firefox on Android).
I'm glad to see that I'm not the only one who thinks this. I did an entire write-up about the state of input devices on Firefox and other browsers. This section in particular talks about this a bit.
a web page could instead use any-pointer: coarse as a signal that it should present a touch-friendly UI
The longer I've considered this issue, the more I think that the entire "primary pointing device" concept is something that should be rarely (or possibly never) used by web developers. Myself, when I use the Surface Pro, I use the touch screen for most interactions but switch to the keyboard whenever I have to type something. There are likely many artists that alternate between using their fingers and a graphics pen for input. It seems like the presence (and thus potential use) of a pointing device should be more important to app developers than which one might be considered "primary".
IoW if I were a web dev, I would basically just say "Alright -- If you have any coarse pointing device attached to your computer, I'm going to display touch controls. If you have any fine pointing device attached to your computer, I'm going to display scroll bars. If you have both, I will display both."